Realistic AI-generated images and voice recordings may pose a threat to democracy. (Pexels)

If You Allow It, Political Deepfakes Could Manipulate Your Mind

The latest danger to democracy could be realistic AI-generated images and voice recordings, though these are only the newest form of a kind of deception that has been around for a long time. Rather than relying on AI to debunk rumors or teaching people to spot fake images, a more effective approach would be to promote critical thinking habits: redirecting our focus, evaluating our sources, and second-guessing ourselves.

Some of these critical thinking tools fall into the category of "system 2," or slow thinking, as described in Daniel Kahneman's book Thinking, Fast and Slow. AI is good at fooling the quick-thinking "system 1" mode, which often jumps to conclusions.

We can start by refocusing on policies and performance rather than gossip and rumour. So what if former President Donald Trump tripped over a word and then blamed AI manipulation? So what if President Joe Biden forgot a date? Neither incident says anything about either man's political record or priorities.

Puzzling over which pictures are real and which are fake can be a waste of time and energy. Research suggests that we are terrible at spotting fakes.

“We’re very good at picking up on the wrong things,” said computational neuroscientist Tijl Grootswagers of the University of Western Sydney. People typically hunt for flaws when trying to spot fakes, but it is the real photos that are most likely to contain them.

People may subconsciously trust AI-generated images more precisely because they are more flawless than the real thing, he said. We like and trust faces that are less unusual and more symmetrical, so AI-generated faces can often appear more attractive and trustworthy than real ones.

Simply asking voters to do more research when confronted with social media images or claims is not enough. Social scientists recently made the alarming discovery that people were more likely to believe fake news after doing “research” using Google.

That wasn’t proof that research is bad for people or democracy. The problem was that many people research mindlessly: they look only for corroborating evidence, which, like everything else on the Internet, is abundant, no matter how outlandish the claim.

True inquiry involves questioning whether a particular source should be believed. Is it a reputable news site? An expert who has earned the public’s trust? Real research also means entertaining the possibility that what you want to believe may be wrong. One of the most common reasons a rumor circulates on X but not in the mainstream media is a lack of credible evidence.

Artificial intelligence has made it cheaper and easier than ever to use social media to promote a fake news site, by generating realistic fake personas to comment on articles, said Filippo Menczer, a computer scientist and director of Indiana University’s Observatory on Social Media.

He has spent years studying the proliferation of fake accounts, known as bots, which can sway opinion through the psychological principle of social proof — creating the impression that many people like or agree with a person or idea. Early bots were crude, but now, he told me, they can be built to carry on long, detailed and highly realistic conversations.

But this is just a new tactic in a very old battle. “You don’t really need sophisticated tools to create misinformation,” said psychologist Gordon Pennycook of Cornell University. People have long committed fraud using Photoshop or by repurposing real images – for instance, passing off pictures from Syria as scenes from Gaza.

Pennycook and I talked about whether the greater danger lies in too much trust or too little. While too little trust might lead people to doubt things that are real, we agreed that the greater danger comes from people being too trusting.

What we should really strive for is discernment: getting people to ask the right kinds of questions. “When people share things on social media, they don’t even think about whether it’s true,” he said. They think more about how sharing it will make them look.

Keeping this tendency in mind might have spared actor Mark Ruffalo some embarrassment; he recently apologized for sharing what was apparently a deepfake photo suggesting that Donald Trump was involved in Jeffrey Epstein’s sexual assault of underage girls.

If AI makes it impossible to trust what we see on TV or on social media, that is not entirely a bad thing, because much of it was unreliable and manipulative long before the recent advances in AI. Decades ago, the advent of television famously made physical attractiveness a far more important factor for candidates. There are more important criteria on which to base a vote.

Reflecting on policies, questioning sources and second-guessing ourselves requires a slower, more effortful form of human intelligence. But given what is at stake, it is worth it.
